Relax then Compensate: On Max-Product Belief Propagation and More

Authors

  • Arthur Choi
  • Adnan Darwiche
Abstract

We introduce a new perspective on approximations to the maximum a posteriori (MAP) task in probabilistic graphical models that is based on simplifying a given instance and then tightening the approximation. We start with a structural relaxation of the original model, then infer the relaxation's deficiencies and compensate for them. This perspective allows us to identify two distinct classes of approximations. First, we find that max-product belief propagation can be viewed as a way to compensate for a relaxation, based on a particular idealized case for exactness. Second, we identify an approach to compensation that is based on a more refined idealized case, resulting in a new approximation with distinct properties. We go on to propose a new class of algorithms that, starting with a relaxation, iteratively seek tighter approximations.
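The relaxation step can be illustrated with a toy model (hypothetical, not from the paper): deleting an equivalence constraint enlarges the space being maximized over, so the relaxed MAP value upper-bounds the exact one, and the gap is what compensation is designed to close. The factor values below are arbitrary illustrations.

```python
import itertools

# Hypothetical toy model (not from the paper): a binary MRF on a
# frustrated loop X1 - X2 - X3 - X1 with made-up factor values.
def attract(a, b):
    return 2.0 if a == b else 1.0  # prefers agreement

def repel(a, b):
    return 2.0 if a != b else 1.0  # prefers disagreement

def map_value_original():
    # Exact MAP value of the loopy model by brute force.
    return max(
        attract(x1, x2) * attract(x2, x3) * repel(x3, x1)
        for x1, x2, x3 in itertools.product((0, 1), repeat=3)
    )

def map_value_relaxed():
    # Relax the equivalence between X1 and a clone X1' used by the
    # factor on edge X3 - X1; the loop becomes a chain, and no
    # compensation is applied yet.
    return max(
        attract(x1, x2) * attract(x2, x3) * repel(x3, x1c)
        for x1, x2, x3, x1c in itertools.product((0, 1), repeat=4)
    )

print(map_value_original())  # 4.0
print(map_value_relaxed())   # 8.0: an upper bound; compensation tightens it
```

Because the relaxed model maximizes over a strict superset of configurations (those where the clone may disagree with the original variable), its MAP value can only be larger or equal.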


Similar articles

Dual Decomposition from the Perspective of Relax, Compensate and then Recover

Relax, Compensate and then Recover (RCR) is a paradigm for approximate inference in probabilistic graphical models that has previously provided theoretical and practical insights on iterative belief propagation and some of its generalizations. In this paper, we characterize the technique of dual decomposition in terms of RCR, viewing it as a specific way to compensate for relaxed equivalence ...


Lifted Relax, Compensate and then Recover: From Approximate to Exact Lifted Probabilistic Inference

We propose an approach to lifted approximate inference for first-order probabilistic models, such as Markov logic networks. It is based on performing exact lifted inference in a simplified first-order model, which is found by relaxing first-order constraints, and then compensating for the relaxation. These simplified models can be incrementally improved by carefully recovering constraints that ...


Approximating Weighted Max-SAT Problems by Compensating for Relaxations

We introduce a new approach to approximating weighted Max-SAT problems that is based on simplifying a given instance, and then tightening the approximation. First, we relax its structure until it is tractable for exact algorithms. Second, we compensate for the relaxation by introducing auxiliary weights. More specifically, we relax equivalence constraints from a given Max-SAT problem, which we ...
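A minimal sketch of why the auxiliary weights are needed (hypothetical instance, not from the paper): if the equivalence between occurrences of a variable is relaxed with no compensation, each clause can satisfy itself independently, so the relaxed optimum over-counts.

```python
import itertools

# Hypothetical two-clause weighted Max-SAT instance (illustration only):
#   clause (x) with weight 3,  clause (not x) with weight 2.
clauses = [([("x", True)], 3.0), ([("x", False)], 2.0)]

def value(cls, assignment):
    # Total weight of clauses satisfied by the assignment.
    return sum(w for lits, w in cls
               if any(assignment[v] == wanted for v, wanted in lits))

def best_original():
    # Optimum of the original instance by brute force.
    return max(value(clauses, {"x": b}) for b in (False, True))

def best_relaxed():
    # Relax the equivalence between occurrences of x: each clause gets
    # its own copy x_0, x_1 of the variable, with no compensating
    # auxiliary weights added.
    relaxed = [([(f"x_{i}", wanted) for _, wanted in lits], w)
               for i, (lits, w) in enumerate(clauses)]
    assignments = itertools.product((False, True), repeat=len(relaxed))
    return max(value(relaxed, {f"x_{i}": b for i, b in enumerate(bits)})
               for bits in assignments)

print(best_original())  # 3.0
print(best_relaxed())   # 5.0: both clauses count at once
```

The over-count of 2.0 is exactly the slack that compensating weights on the copies are introduced to absorb.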


Relax, Compensate and Then Recover

We present in this paper a framework of approximate probabilistic inference which is based on three simple concepts. First, our notion of an approximation is based on "relaxing" equality constraints, for the purposes of simplifying a problem so that it can be solved more readily. Second is the concept of "compensation," which calls for imposing weaker notions of equality to compensate for the ...


KAIST CS 774 Markov Random Field: Theory and Application, Sep 10, 2009, Lecture 3

In this lecture, we study the belief propagation (BP) and max-product (MP) algorithms. The last lecture reminded us that, in an MRF, computing the marginal probabilities of random variables and the maximum a posteriori (MAP) assignment are important tasks. Belief propagation is a popular algorithm used to compute the marginal probabilities of random variables. The max-product algorithm is ...
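On a tree-structured model, max-product is exact; a sketch on a hypothetical three-node binary chain (not the lecture's example, with made-up potentials) shows the messages passed leaf to root recovering the brute-force MAP value:

```python
import itertools

# Hypothetical three-node binary chain X0 - X1 - X2 with made-up potentials.
unary = [
    [1.0, 2.0],   # phi_0
    [3.0, 1.0],   # phi_1
    [1.0, 1.5],   # phi_2
]
pair = [[2.0, 1.0],
        [1.0, 2.0]]  # psi(a, b), shared by both edges

def brute_force_map():
    # MAP value by enumerating all 2^3 assignments.
    return max(
        unary[0][a] * unary[1][b] * unary[2][c] * pair[a][b] * pair[b][c]
        for a, b, c in itertools.product((0, 1), repeat=3)
    )

def max_product_map():
    # Forward max-product messages along the chain.
    # m01(b) = max_a phi_0(a) * psi(a, b)
    m01 = [max(unary[0][a] * pair[a][b] for a in (0, 1)) for b in (0, 1)]
    # m12(c) = max_b phi_1(b) * m01(b) * psi(b, c)
    m12 = [max(unary[1][b] * m01[b] * pair[b][c] for b in (0, 1))
           for c in (0, 1)]
    # The max-marginal at X2 equals the MAP value; exact on a tree.
    return max(unary[2][c] * m12[c] for c in (0, 1))

print(brute_force_map())   # 12.0
print(max_product_map())   # 12.0, matching exactly
```

Replacing each `max` over the incoming variable with a sum turns the same schedule into sum-product BP for marginals.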




Publication date: 2009